4 Learning, Regret Minimization, and Equilibria

Authors

  • A. Blum
  • Y. Mansour
Abstract

Many situations involve repeatedly making decisions in an uncertain environment: for instance, deciding what route to drive to work each day, or repeated play of a game against an opponent with an unknown strategy. In this chapter we describe learning algorithms with strong guarantees for settings of this type, along with connections to game-theoretic equilibria when all players in a system are simultaneously adapting in such a manner. We begin by presenting algorithms for repeated play of a matrix game with the guarantee that against any opponent, they will perform nearly as well as the best fixed action in hindsight (also called the problem of combining expert advice or minimizing external regret). In a zero-sum game, such algorithms are guaranteed to approach or exceed the minimax value of the game, and even provide a simple proof of the minimax theorem. We then turn to algorithms that minimize an even stronger form of regret, known as internal or swap regret. We present a general reduction showing how to convert any algorithm for minimizing external regret to one that minimizes this stronger form of regret as well. Internal regret is important because when all players in a game minimize this stronger type of regret, the empirical distribution of play is known to converge to correlated equilibrium. The third part of this chapter explains a different reduction: how to convert from the full information setting, in which the action chosen by the opponent is revealed after each time step, to the partial information (bandit) setting, where at each time step only the payoff of the selected action is observed (such as in routing), and still maintain a small external regret. Finally, we end by discussing routing games in the Wardrop model, where one can show that if all participants minimize their own external regret, then overall traffic is guaranteed to converge to an approximate Nash equilibrium.
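As a concrete illustration of the external-regret guarantee described above, here is a minimal Python sketch of the multiplicative-weights (Hedge) scheme, one standard algorithm for combining expert advice. The function name, loss encoding, and fixed learning rate are our own illustrative choices, not necessarily the chapter's exact presentation.

import math

def hedge(loss_vectors, num_actions, eta=0.1):
    """Multiplicative-weights sketch for the full-information experts
    setting: the loss of *every* action is revealed each round.
    Returns the algorithm's total expected loss."""
    weights = [1.0] * num_actions
    total_loss = 0.0
    for losses in loss_vectors:           # losses[i] assumed in [0, 1]
        z = sum(weights)
        probs = [w / z for w in weights]  # play action i with prob. probs[i]
        total_loss += sum(p * l for p, l in zip(probs, losses))
        # Exponentially downweight each action in proportion to its loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss

# Toy run: action 1 is best in hindsight; the algorithm's loss tracks it.
seq = [[0.9, 0.1]] * 1000
print(hedge(seq, num_actions=2))   # close to 0.1 * 1000 = 100

With eta set to roughly sqrt(ln(num_actions) / T), the total expected loss exceeds that of the best fixed action by only O(sqrt(T log N)), which is exactly the "nearly as well as the best fixed action in hindsight" guarantee the abstract refers to.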


Similar Resources

No-Φ-Regret: A Connection between Computational Learning Theory and Game Theory

This paper explores a fundamental connection between computational learning theory and game theory through a property we call no-Φ-regret. Given a set of transformations Φ (i.e., mappings from actions to actions), a learning algorithm is said to exhibit no Φ-regret if an agent experiences no regret for playing the actions the algorithm prescribes, rather than playing the transformed actions pre...


A General Class of No-Regret Learning Algorithms and Game-Theoretic Equilibria

A general class of no-regret learning algorithms, called no-Φ-regret learning algorithms, is defined which spans the spectrum from no-external-regret learning to no-internal-regret learning and beyond. The set Φ describes the set of strategies to which the play of a given learning algorithm is compared. A learning algorithm satisfies no-Φ-regret if no regret is experienced for playing as the al...
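In symbols, this comparison can be written as follows (a standard formalization consistent with the definition above; the notation is ours, not the paper's):

\[
R_\Phi(T) \;=\; \max_{\phi \in \Phi} \sum_{t=1}^{T} \Big( u_t\big(\phi(a_t)\big) - u_t(a_t) \Big),
\]

where \(a_t\) is the action played at time \(t\) and \(u_t\) is that round's payoff function. Taking \(\Phi\) to be the constant maps recovers external regret, while taking \(\Phi\) to be all maps from actions to actions recovers swap regret.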


CS 364A: Algorithmic Game Theory, Lecture #16: Best-Response Dynamics

Affirmative answers to these questions are important because they justify equilibrium analysis. Properties of equilibria, such as a near-optimal objective function value, are not obviously relevant when players fail to find one. More generally, proving that natural learning algorithms converge quickly to an equilibrium lends plausibility to the predictive power of an equilibrium concept. To rea...


Fast Convergence to ε-PNE in Symmetric Routing



Hedging Under Uncertainty: Regret Minimization Meets Exponentially Fast Convergence

This paper examines the problem of multi-agent learning in N-person non-cooperative games. For concreteness, we focus on the so-called "hedge" variant of the exponential weights (EW) algorithm, one of the most widely studied algorithmic schemes for regret minimization in online learning. In this multi-agent context, we show that a) dominated strategies become extinct (a.s.); and b) in generic g...
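For reference, the exponential-weights ("hedge") update the snippet refers to has the standard form (notation ours):

\[
x_{i,t+1} \;=\; \frac{x_{i,t}\, e^{\eta\, u_i(t)}}{\sum_{j} x_{j,t}\, e^{\eta\, u_j(t)}},
\]

where \(x_t\) is the player's mixed strategy at round \(t\), \(u_i(t)\) is the payoff of pure strategy \(i\), and \(\eta > 0\) is a learning rate.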



Publication date: 2007